Wearable robotic devices have the potential to assist and protect their users. Toward the design of a smart helmet, this paper studies the effectiveness of audio and visual warnings in helping participants brace for impact. A user study examined different warnings and impacts applied to users at runtime. Perturbation forces scaled to each user's mass were applied from different directions, and user displacement was measured to characterize warning effectiveness. This was done during treadmill walking using an active perturbation system adapted to apply forward, backward, left, or right perturbation forces at precise moments in the gait cycle. This paper presents an overview of the system and demonstrates its ability to deliver consistent warnings and perturbations precisely timed within the gait cycle. The user-study results highlight the effectiveness of visual and audio warnings in helping users brace, leading to guidelines that can inform future human-robot warning systems.
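As a rough illustration of the protocol, the sketch below schedules a single perturbation trial: a direction is drawn at random, the force magnitude is scaled linearly to the user's mass, and a trigger point is fixed within the gait cycle. The scaling constant, trigger phase, and trial structure are placeholder assumptions, not values from the study.

```python
import random

DIRECTIONS = ("forward", "backward", "left", "right")

def perturbation_force(user_mass_kg: float, scale: float = 0.15) -> float:
    """Scale perturbation magnitude to user mass (scale is a placeholder)."""
    g = 9.81  # m/s^2
    return scale * user_mass_kg * g  # force in newtons

def schedule_trial(user_mass_kg: float, trigger_phase: float = 0.3) -> dict:
    """Pick a random direction and a trigger point within the gait cycle.

    trigger_phase is the fraction of the gait cycle (0 = heel strike) at
    which the perturbation fires; the timing actually used in the study
    is not specified here.
    """
    return {
        "direction": random.choice(DIRECTIONS),
        "force_N": perturbation_force(user_mass_kg),
        "gait_phase": trigger_phase,
    }

print(schedule_trial(user_mass_kg=70.0))
```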
Jackknifing refers to the serious situation in which a vehicle-trailer system enters a jackknife state and, if the backing maneuver is not corrected, the vehicle and trailer ultimately collide. This paper considers trailer-backing maneuvers of a typical vehicle-trailer system in which wheel side slip, caused by physical interaction among the vehicle, trailer, and environment, makes the jackknife-state boundaries vary. Analysis of a kinematic model that accounts for side slip at the vehicle and trailer wheels shows that vehicle-trailer systems should be divided into three categories based on the ratio of hitch length to trailer tongue length, each with distinct behavior. The long-trailer category may have no jackknife states, while the other two categories always have states that lead to jackknifing. The jackknife limits are found to be the boundaries between jackknife states and recoverable regions, and they can be divided into safe and unsafe limits, the latter of which must be avoided. Simulations and physical experiments support these results and provide insight into which vehicle and trailer states lead to jackknifing. Simulation also demonstrates the benefit of accounting for these new slip-based jackknife limits in trailer-backing control.
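For context, the sketch below integrates the standard no-slip kinematic model of an off-axle-hitched vehicle-trailer system and flags divergence of the hitch angle while backing; the paper's contribution is to extend this kind of model with wheel side slip. The geometry values and the fixed jackknife threshold are illustrative placeholders, not the slip-based limits derived in the paper.

```python
import math

def simulate_hitch_angle(v, delta, Lw, a, L2, phi0=0.0, dt=0.01, T=15.0,
                         phi_limit=math.radians(80)):
    """Integrate the hitch angle of an off-axle-hitched vehicle-trailer.

    Standard no-slip kinematics (the paper adds wheel side slip on top of
    a model like this):
        theta1_dot = (v / Lw) * tan(delta)                 # vehicle yaw rate
        theta2_dot = (v / L2) * sin(phi) - (a / L2) * theta1_dot * cos(phi)
        phi = theta1 - theta2                              # hitch angle
    where Lw is the wheelbase, a the hitch length behind the rear axle,
    and L2 the trailer tongue length. v < 0 corresponds to backing up.
    phi_limit is an illustrative jackknife threshold, not a derived limit.
    """
    phi, t = phi0, 0.0
    while t < T:
        theta1_dot = (v / Lw) * math.tan(delta)
        theta2_dot = (v / L2) * math.sin(phi) - (a / L2) * theta1_dot * math.cos(phi)
        phi += (theta1_dot - theta2_dot) * dt
        if abs(phi) > phi_limit:
            return t, phi  # jackknife: hitch angle diverged past the limit
        t += dt
    return None, phi  # no jackknife within the horizon

# Backing at 1 m/s with a small steering offset; lengths in metres.
t_jack, phi_end = simulate_hitch_angle(v=-1.0, delta=0.05, Lw=2.8, a=1.0, L2=4.0)
print(t_jack, math.degrees(phi_end))
```

Backing (v < 0) makes the sin(phi) term a positive feedback on the hitch angle, which is why uncorrected reverse maneuvers diverge toward a jackknife.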
This research focuses on soft robotic bladders for monitoring and controlling the interaction between a user's head and the shell of a smart helmet. Compression of these bladders determines impact dissipation; hence this paper focuses on sensing and estimating bladder compression. An IR range-finder-based solution for estimating bladder compression is evaluated using regression techniques and a neural network. A Hall-effect (HE) magnetic sensing system is also examined, in which sensors embedded in the base of the bladder sense the position of a magnet at the top of the bladder. The paper presents the HE sensor array and the signal processing of the HE voltage data, followed by a neural network (NN) for predicting bladder compression. The efficacy of different training data sets on NN performance is investigated, and different NN configurations are examined to identify one that provides accurate estimates with as few nodes as possible. Different bladder-compression profiles are evaluated to characterize the IR range-finding and HE-based techniques in application scenarios.
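A minimal sketch of the HE-voltage-to-compression regression stage, using scikit-learn's MLPRegressor on synthetic stand-in data; the number of sensors, network size, and data are placeholder assumptions, since choosing training sets and NN configurations is precisely what the paper investigates.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 Hall-effect voltages per sample -> compression (mm).
# Real inputs would come from the sensor array after signal processing.
n_samples, n_sensors = 2000, 4
X = rng.uniform(0.0, 3.3, size=(n_samples, n_sensors))   # volts
y = X @ np.array([2.0, -1.5, 1.0, 0.5]) + 0.1 * rng.standard_normal(n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Start small: the paper searches for accurate estimates with as few
# nodes as possible, so one would sweep hidden_layer_sizes.
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
nn.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {nn.score(X_te, y_te):.3f}")
```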
This paper evaluates the capabilities of a soft robotic pneumatic actuator derived from the terrain-display haptic device known as the "smart shoe". The smart shoe's bladder design was upgraded with a pressure supply and greater output flow capability. A benchtop setup was created to rigorously test this new type of actuator. The bandwidth and stiffness capabilities of the new actuator were evaluated relative to the forces and displacements encountered during human gait. Four force-versus-displacement curves relevant to haptic terrain display are presented and tested using sliding-mode tracking control. The actuator was found to sustain stiffnesses similar to that of a soft-soled shoe on concrete, as well as on other terrains (sand, dirt, etc.), although its 7.3 Hz bandwidth fell short of the 10 Hz target. Bladder compressions at 20 mm/s, a speed similar to that of human gait, showed encouraging results in tracking the desired force trajectories. These results demonstrate the actuator's ability to display haptic terrain trajectories, providing a foundation for future wearable haptic terrain displays.
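A minimal sketch of sliding-mode force tracking of the kind described, assuming illustrative first-order pressure-to-force dynamics and a saturated switching term to limit chatter; the plant model, gains, and boundary layer are assumptions, not the paper's identified actuator dynamics.

```python
import numpy as np

def sat(x):
    """Saturated sign function (boundary layer to limit chatter)."""
    return np.clip(x, -1.0, 1.0)

def track_force(F_des, dt=0.001, tau=0.05, K=50.0, phi=0.5):
    """Sliding-mode tracking of a desired force trajectory.

    Plant assumption (illustrative, not the paper's identified model):
    first-order pressure-to-force dynamics F_dot = (u - F) / tau.
    With error e = F - F_d and surface s = e, the law
        u = F + tau * (F_d_dot - K * sat(s / phi))
    yields s_dot = -K * sat(s / phi), driving e into the boundary layer.
    """
    F, out = 0.0, []
    Fd_dot = np.gradient(F_des, dt)
    for Fd, Fdd in zip(F_des, Fd_dot):
        s = F - Fd
        u = F + tau * (Fdd - K * sat(s / phi))
        F += (u - F) / tau * dt   # simulate the assumed plant
        out.append(F)
    return np.array(out)

t = np.arange(0.0, 2.0, 0.001)
F_des = 40.0 + 20.0 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz force profile, N
F = track_force(F_des)
print(f"max tracking error: {np.max(np.abs(F - F_des)):.3f} N")
```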
Balancing accuracy and error convergence against graceful motion in steering control is challenging because of the competing nature of these requirements, especially across a range of operating speeds and conditions. This paper shows that an integrated multi-layer steering controller that accounts for the effects of slip on kinematic control, dynamic control, and steering-actuator rate commands achieves accurate yet graceful path following. The work builds on a multi-layer side-slip and yaw-based model that lets the derived controllers account for error due to side slip as well as the mapping between steering commands and graceful lateral motion. Observer-based side-slip estimation is combined with the heading error in the kinematic controller to provide feed-forward slip compensation. Path-following error is compensated by a continuous variable-structure controller (VSC) using a speed-based path manifold to balance graceful motion and error convergence. A backstepping dynamic controller uses the resulting yaw-rate command to generate steering-rate commands. A high-gain observer (HGO) estimates side slip and yaw rate for output-feedback control. A stability analysis of the output-feedback controller is provided, and peaking is addressed. The work addresses lateral control only, so the steering controller can be combined with other speed controllers. Field results provide comparisons with related methods and demonstrate the approach in varied, complex scenarios with differing weather conditions and disturbances.
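A structural sketch of the kinematic and dynamic layers described above: the estimated side slip is folded into the heading term as feed-forward compensation to form a yaw-rate command, which a lower layer converts into a steering-rate command. The linear error terms and gains are placeholders; the paper instead uses a continuous VSC on a speed-based path manifold and a backstepping law with HGO-estimated states.

```python
def yaw_rate_command(heading_error, cross_track_error, slip_estimate, v,
                     k_heading=1.5, k_cte=0.8):
    """Kinematic-layer yaw-rate command with feed-forward slip compensation.

    heading_error:     path heading minus vehicle heading (rad)
    cross_track_error: lateral offset from the path (m)
    slip_estimate:     side-slip angle from an observer (rad)

    The vehicle's course is heading plus side slip, so the slip estimate
    is folded into the heading term as feed-forward compensation. The
    linear feedback terms and gains are illustrative stand-ins.
    """
    course_error = heading_error - slip_estimate
    return k_heading * course_error + (k_cte / max(v, 0.1)) * cross_track_error

def steering_rate_command(yaw_rate_cmd, yaw_rate_meas, k_r=4.0):
    """Dynamic-layer stand-in: drive the measured yaw rate to the command.

    The paper derives this layer via backstepping with HGO-estimated
    states; a proportional law is used here only to show the structure.
    """
    return k_r * (yaw_rate_cmd - yaw_rate_meas)

r_cmd = yaw_rate_command(heading_error=0.05, cross_track_error=0.3,
                         slip_estimate=0.02, v=4.0)
print(r_cmd, steering_rate_command(r_cmd, yaw_rate_meas=0.0))
```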
Large language models (LLMs) have demonstrated strong performance in zero-shot reasoning tasks, including abductive reasoning. This is reflected in their ability to perform well on current benchmarks in this area. However, to truly test the limits of LLMs in abductive reasoning, a more challenging benchmark is needed. In this paper, we present such a benchmark, consisting of 191 long-form mystery stories, each approximately 1200 words in length and presented in the form of detective puzzles. Each puzzle includes a multiple-choice question for evaluation sourced from the "5 Minute Mystery" platform. Our results show that state-of-the-art GPT models perform significantly worse than human solvers on this benchmark, with an accuracy of 28% compared to 47% for humans. This indicates that there is still a significant gap in the abductive reasoning abilities of LLMs and highlights the need for further research in this area. Our work provides a challenging benchmark for future studies on reasoning in language models and contributes to a better understanding of the limits of LLMs' abilities.
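A minimal sketch of the evaluation loop implied above, assuming each puzzle is stored as story text, multiple-choice options, and a gold answer index; the record fields and the `solve` stand-in (which a real harness would replace with a prompted LLM call and answer parsing) are hypothetical, since the abstract does not specify a data format.

```python
from dataclasses import dataclass

@dataclass
class Puzzle:
    story: str            # ~1200-word mystery text
    options: list[str]    # multiple-choice answers (e.g., suspects)
    answer: int           # index of the correct option

def solve(puzzle: Puzzle) -> int:
    """Stand-in for an LLM call that returns the index of a chosen option.

    A real evaluation would prompt the model with the story and options
    and parse its choice from the completion.
    """
    return 0  # placeholder

def accuracy(puzzles: list[Puzzle]) -> float:
    correct = sum(solve(p) == p.answer for p in puzzles)
    return correct / len(puzzles)

demo = [Puzzle("...", ["butler", "gardener", "chef", "maid"], answer=2)]
print(f"accuracy: {accuracy(demo):.2f}")
```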
Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a critical facet of most high-stakes human decision-making (e.g., understanding how a trainee doctor differs from an experienced consultant). Accordingly, this paper reports a novel user study (N=96) on how people's expertise in a domain affects their understanding of post-hoc explanations by example for a deep-learning, black-box classifier. The results show that people's understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions (e.g., response times, perceptions of correctness and helpfulness), when the image-based domain considered is familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada MNIST). The wider implications of these new findings for XAI strategies are discussed.
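For readers unfamiliar with the explanation style studied here, the sketch below shows a generic post-hoc explanation by example: retrieve the training examples nearest to a query in some feature space and present them as the explanation. The random stand-in features are placeholders; example-based XAI methods typically use the black-box classifier's own latent representation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def explain_by_example(features_train, labels_train, feature_query, k=3):
    """Return the k training examples nearest to the query in feature space.

    features_train: (n, d) array of training-set features
    feature_query:  (d,) feature vector of the instance to explain
    The returned neighbours (and their labels) are shown to the user as
    the explanation for the classifier's prediction.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(features_train)
    dist, idx = nn.kneighbors(feature_query.reshape(1, -1))
    return idx[0], labels_train[idx[0]], dist[0]

# Toy stand-in features; with MNIST or Kannada MNIST these would be the
# black-box model's penultimate-layer activations.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 16)), rng.integers(0, 10, 100)
idx, labels, dist = explain_by_example(X, y, X[0])
print(idx, labels)
```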
While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes. In this paper, we explore an alternative method for 3D object generation which produces 3D models in only 1-2 minutes on a single GPU. Our method first generates a single synthetic view using a text-to-image diffusion model, and then produces a 3D point cloud using a second diffusion model which conditions on the generated image. While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases. We release our pre-trained point cloud diffusion models, as well as evaluation code and models, at https://github.com/openai/point-e.
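A structural sketch of the two-stage pipeline described above. Both sampler functions are hypothetical stubs, not the actual point_e API (see the linked repository for the released models and evaluation code); they stand in for the text-to-image diffusion model and the image-conditioned point-cloud diffusion model.

```python
import numpy as np

def sample_text_to_image(prompt: str) -> np.ndarray:
    """Stand-in for a text-to-image diffusion sampler (stage 1).

    A real implementation would run the released text-conditional model.
    """
    return np.zeros((64, 64, 3), dtype=np.float32)  # placeholder view

def sample_image_to_pointcloud(image: np.ndarray, n_points: int = 1024) -> np.ndarray:
    """Stand-in for the image-conditioned point-cloud diffusion model (stage 2)."""
    return np.zeros((n_points, 6), dtype=np.float32)  # xyz + rgb placeholder

def text_to_3d(prompt: str) -> np.ndarray:
    """Two-stage pipeline: text -> single synthetic view -> 3D point cloud."""
    view = sample_text_to_image(prompt)
    return sample_image_to_pointcloud(view)

cloud = text_to_3d("a red chair")
print(cloud.shape)  # (1024, 6)
```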
In many high-dimensional prediction or classification tasks, complementary data on the features are available, e.g. prior biological knowledge on (epi)genetic markers. Here we consider tasks with numerical prior information that provide an insight into the importance (weight) and the direction (sign) of the feature effects, e.g. regression coefficients from previous studies. We propose an approach for integrating multiple sources of such prior information into penalised regression. If suitable co-data are available, this improves the predictive performance, as shown by simulation and application. The proposed method is implemented in the R package 'transreg' (https://github.com/lcsb-bds/transreg).
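One simple way to fold such numerical priors into penalised regression, in the spirit of adaptive penalties, is sketched below: each feature is rescaled by its prior weight (and oriented by its prior sign) before a lasso fit, so features with stronger prior support are penalised less. This illustrates the idea only; the method implemented in 'transreg' differs.

```python
import numpy as np
from sklearn.linear_model import Lasso

def prior_weighted_lasso(X, y, prior_weight, prior_sign, alpha=0.1):
    """Lasso with feature-specific penalties encoded via rescaling.

    Scaling column j by prior_weight[j] means its coefficient is shrunk
    by effectively alpha / prior_weight[j]: features with large prior
    weights are penalised less. prior_sign (+1/-1) orients each feature
    so a positive fitted coefficient agrees with the prior direction.
    """
    scale = prior_weight * prior_sign
    model = Lasso(alpha=alpha).fit(X * scale, y)
    beta = model.coef_ * scale          # map back to the original scale
    return beta, model.intercept_

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]
y = X @ beta_true + 0.5 * rng.standard_normal(200)

# Priors, e.g. regression coefficients from an earlier study.
w = np.abs(beta_true) + 0.1             # importance (weight)
s = np.sign(beta_true + 1e-9)           # direction (sign)
beta_hat, _ = prior_weighted_lasso(X, y, w, s)
print(np.round(beta_hat[:6], 2))
```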
The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.
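As a rough illustration, the sketch below writes the kind of machine-readable model metadata a FAIR AI project template might require, mapping each FAIR principle to concrete fields; all field names, identifiers, and values are placeholders, not the schema proposed in the paper.

```python
import json

# Illustrative FAIR metadata for a published model; every field name and
# value below is a placeholder, not the paper's actual template schema.
model_card = {
    "findable": {
        "doi": "10.5281/zenodo.0000000",        # persistent identifier (placeholder)
        "keywords": ["Higgs", "b-tagging", "graph neural network"],
    },
    "accessible": {
        "weights_url": "https://example.org/model/weights.onnx",  # placeholder
        "access_protocol": "HTTPS",
    },
    "interoperable": {
        "format": "ONNX",                        # framework-neutral exchange format
        "input_schema": "jet constituent graph (nodes: tracks, edges: kNN)",
    },
    "reusable": {
        "license": "MIT",
        "training_data_doi": "10.5281/zenodo.0000001",  # placeholder
        "metrics": {"auc": None},                # filled in at publication time
    },
}

with open("fair_model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```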